modeling

class CodeGenAttention(config: CodeGenConfig)[source]

Bases: Layer

forward(hidden_states: Tensor, attention_mask: Tensor | None = None, use_cache: bool | None = False, cache: Tuple[Tensor] | None = None, output_attentions: bool | None = False) → Tuple[source]

Defines the computation performed at every call.

Parameters:
  • hidden_states (Tensor) -- Input hidden states.

  • attention_mask (Tensor, optional) -- Mask used to avoid performing attention on unwanted positions. Defaults to None.

  • use_cache (bool, optional) -- Whether to return key/value states to speed up decoding. Defaults to False.

  • cache (Tuple[Tensor], optional) -- Cached key/value states from previous decoding steps; only used for inference. Defaults to None.

  • output_attentions (bool, optional) -- Whether to return the attention weights. Defaults to False.

class CodeGenMLP(config: CodeGenConfig)[source]

Bases: Layer

forward(hidden_states: Tensor) → Tensor[source]

Defines the computation performed at every call.

Parameters:
  • hidden_states (Tensor) -- Input hidden states.

Returns:

The transformed hidden states.

Return type:

Tensor

class CodeGenBlock(config: CodeGenConfig)[source]

Bases: Layer

forward(hidden_states: Tensor, attention_mask: Tensor | None = None, use_cache: bool | None = False, cache: Tuple[Tensor] | None = None, output_attentions: bool | None = False) → Tuple[source]

Defines the computation performed at every call.

Parameters:
  • hidden_states (Tensor) -- Input hidden states.

  • attention_mask (Tensor, optional) -- Mask used to avoid performing attention on unwanted positions. Defaults to None.

  • use_cache (bool, optional) -- Whether to return key/value states to speed up decoding. Defaults to False.

  • cache (Tuple[Tensor], optional) -- Cached key/value states from previous decoding steps; only used for inference. Defaults to None.

  • output_attentions (bool, optional) -- Whether to return the attention weights. Defaults to False.

class CodeGenPreTrainedModel(*args, **kwargs)[source]

Bases: PretrainedModel

An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.

config_class

alias of CodeGenConfig

base_model_class

alias of CodeGenModel
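A minimal loading/saving sketch using the interface inherited from this class (the checkpoint name Salesforce/codegen-350M-mono is an assumption; substitute any available CodeGen checkpoint):

>>> from paddlenlp.transformers import CodeGenModel
>>> # Download (or load from cache) pretrained weights and config.
>>> model = CodeGenModel.from_pretrained("Salesforce/codegen-350M-mono")
>>> # Save locally and reload later.
>>> model.save_pretrained("./codegen-local")
>>> model = CodeGenModel.from_pretrained("./codegen-local")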

class CodeGenModel(config: CodeGenConfig)[source]

Bases: CodeGenPreTrainedModel

The bare CodeGen Model outputting raw hidden-states.

This model inherits from PretrainedModel. Refer to the superclass documentation for the generic methods. This model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matters related to general usage and behavior.

Parameters:
  • config (CodeGenConfig) -- An instance of CodeGenConfig used to construct CodeGenModel.

get_input_embeddings()[source]

Get the input embeddings of the model.

Returns:

The input embeddings of the model.

Return type:

nn.Embedding

set_input_embeddings(new_embeddings)[source]

Set new input embeddings for the model.

Parameters:

new_embeddings (Embedding) -- the new input embeddings for the model

Raises:

NotImplementedError -- if the model has not implemented the set_input_embeddings method
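A short sketch of reading and swapping the input embeddings (building the model from a default CodeGenConfig here is an illustrative assumption):

>>> import paddle
>>> from paddlenlp.transformers import CodeGenConfig, CodeGenModel
>>> model = CodeGenModel(CodeGenConfig())
>>> old_embeddings = model.get_input_embeddings()
>>> # Build a replacement with the same vocabulary and hidden sizes.
>>> new_embeddings = paddle.nn.Embedding(
...     old_embeddings.weight.shape[0], old_embeddings.weight.shape[1]
... )
>>> model.set_input_embeddings(new_embeddings)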

forward(input_ids: Tensor | None = None, attention_mask: Tensor | None = None, token_type_ids: Tensor | None = None, use_cache: bool | None = None, cache: List[Tuple[Tensor]] | None = None, inputs_embeds: Tensor | None = None, output_attentions: bool | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None) → Tuple | BaseModelOutputWithPastAndCrossAttentions[source]

The CodeGenModel forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor, optional) -- Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be int64 and it has a shape of [batch_size, sequence_length].

  • attention_mask (Tensor, optional) -- Mask used in multi-head attention to avoid performing attention to some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float, or bool. When the data type is bool, the masked tokens have False values and the others have True values. When the data type is int, the masked tokens have 0 values and the others have 1 values. When the data type is float, the masked tokens have -INF values and the others have 0 values. Its shape is broadcast to [batch_size, num_attention_heads, sequence_length, sequence_length]; for example, it can be [batch_size, sequence_length], [batch_size, sequence_length, sequence_length], or [batch_size, num_attention_heads, sequence_length, sequence_length]. Defaults to None, which means nothing needs to be prevented from being attended to.

  • use_cache (bool, optional) -- Whether or not to use cache. Defaults to False. If set to True, key value states will be returned and can be used to speed up decoding.

  • cache (list, optional) -- It is a list, and each element in the list is a tuple (incremental_cache, static_cache). See TransformerDecoder.gen_cache for more details. It is only used for inference and should be None for training. Defaults to None.

  • inputs_embeds (Tensor, optional) -- Optionally, instead of passing input_ids you can choose to directly pass an embedded representation of shape (batch_size, sequence_length, hidden_size). This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix. Defaults to None.

  • output_attentions (bool, optional) -- Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. Defaults to False.

  • output_hidden_states (bool, optional) -- Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. Defaults to False.

  • return_dict (bool, optional) -- Whether to return a BaseModelOutputWithPastAndCrossAttentions object. If False, the output will be a tuple of tensors. Defaults to False.

Returns:

An instance of BaseModelOutputWithPastAndCrossAttentions if return_dict=True. Otherwise it returns a tuple of tensors corresponding to the ordered and not-None (depending on the input arguments) fields of BaseModelOutputWithPastAndCrossAttentions. In particular, when return_dict=output_hidden_states=output_attentions=False and cache=None, it returns a tensor representing the output of CodeGenModel, with data type float32 and shape [batch_size, sequence_length, hidden_size].

Example
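A minimal forward-pass sketch (the checkpoint name Salesforce/codegen-350M-mono is an assumption; substitute any available CodeGen checkpoint):

>>> from paddlenlp.transformers import CodeGenModel, CodeGenTokenizer
>>> tokenizer = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
>>> model = CodeGenModel.from_pretrained("Salesforce/codegen-350M-mono")
>>> inputs = tokenizer("def hello_world():", return_tensors="pd")
>>> outputs = model(**inputs, return_dict=True)
>>> # Raw hidden states with shape [batch_size, sequence_length, hidden_size].
>>> print(outputs.last_hidden_state.shape)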

class CodeGenForCausalLM(config: CodeGenConfig)[source]

Bases: CodeGenPreTrainedModel

CodeGen Model with a language modeling head on top.

Parameters:
  • config (CodeGenConfig) -- An instance of CodeGenConfig used to construct CodeGenForCausalLM.

get_output_embeddings()[source]

To be overwritten for models with output embeddings.

Returns:

The output embeddings of the model.

Return type:

Optional[Embedding]

forward(input_ids: Tensor | None = None, attention_mask: Tensor | None = None, token_type_ids: Tensor | None = None, use_cache: bool | None = None, cache: List[Tuple[Tensor]] | None = None, labels: Tensor | None = None, inputs_embeds: Tensor | None = None, output_attentions: bool | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None) → Tuple | CausalLMOutputWithCrossAttentions[source]

The CodeGenForCausalLM forward method, overrides the __call__() special method.

Parameters:
  • input_ids (Tensor, optional) -- See CodeGenModel.

  • attention_mask (Tensor, optional) -- See CodeGenModel.

  • use_cache (bool, optional) -- See CodeGenModel.

  • cache (Tensor, optional) -- See CodeGenModel.

  • labels (Tensor, optional) -- Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., vocab_size].
  • inputs_embeds (Tensor, optional) -- See CodeGenModel.

  • output_attentions (bool, optional) -- See CodeGenModel.

  • output_hidden_states (bool, optional) -- See CodeGenModel.

  • return_dict (bool, optional) -- See CodeGenModel.

Returns:

An instance of CausalLMOutputWithCrossAttentions if return_dict=True. Otherwise it returns a tuple of tensors corresponding to the ordered and not-None (depending on the input arguments) fields of CausalLMOutputWithCrossAttentions. In particular, when return_dict=output_hidden_states=output_attentions=False and cache=labels=None, it returns a tensor lm_logits of shape [batch_size, sequence_length, vocab_size].

Example
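A minimal sketch computing the language-modeling loss and logits (the checkpoint name Salesforce/codegen-350M-mono is an assumption; substitute any available CodeGen checkpoint):

>>> from paddlenlp.transformers import CodeGenForCausalLM, CodeGenTokenizer
>>> tokenizer = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
>>> model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
>>> inputs = tokenizer("def hello_world():", return_tensors="pd")
>>> # Reusing input_ids as labels is valid: the model shifts the labels internally.
>>> outputs = model(**inputs, labels=inputs["input_ids"], return_dict=True)
>>> print(float(outputs.loss))
>>> # lm_logits with shape [batch_size, sequence_length, vocab_size].
>>> print(outputs.logits.shape)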