sentiment_analysis_model

class BoWModel(vocab_size, num_classes, emb_dim=128, padding_idx=0, hidden_size=128, fc_hidden_size=96)[source]

This class implements the Bag of Words Classification Network model to classify texts. At a high level, the model starts by running the tokens through a word embedding. Then, it encodes these representations with a BoWEncoder. Lastly, the output of the encoder is taken to create a final representation, which is passed through some feed-forward layers to output logits (output_layer).

Parameters:
  • vocab_size (int) -- The vocabulary size used to create the embedding.

  • num_classes (int) -- The number of classes of the classifier.

  • emb_dim (int, optional) -- The size of the embedding. Default value is 128.

  • padding_idx (int, optional) -- The padding token id in the embedding; the embedding vector at padding_idx will not be updated during training. Default value is 0.

  • hidden_size (int, optional) -- The output size of the linear layer after the BoW encoder. Default value is 128.

  • fc_hidden_size (int, optional) -- The output size of the linear layer after the first linear layer. Default value is 96.

forward(text, seq_len=None)[source]

Defines the computation performed at every call.

Parameters:
  • text (Tensor) -- The input token ids, with shape [batch_size, seq_len].

  • seq_len (Tensor, optional) -- The actual lengths of the input sequences before padding. Default value is None.
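A minimal usage sketch, not taken from the source docs: the import path is hypothetical, and the vocabulary size, batch shape, and sequence lengths are illustrative.

import paddle

from sentiment_analysis_model import BoWModel  # hypothetical import path

model = BoWModel(vocab_size=10000, num_classes=2)

# A batch of 4 sequences padded to length 32; 0 is the padding id.
text = paddle.randint(low=1, high=10000, shape=[4, 32])
seq_len = paddle.to_tensor([32, 20, 15, 8])  # actual lengths before padding

logits = model(text, seq_len)                          # shape: [4, num_classes]
probs = paddle.nn.functional.softmax(logits, axis=-1)  # class probabilities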

class LSTMModel(vocab_size, num_classes, emb_dim=128, padding_idx=0, lstm_hidden_size=198, direction='forward', lstm_layers=1, dropout_rate=0.0, pooling_type=None, fc_hidden_size=96)[source]

This class implements the LSTM Classification Network model to classify texts. At a high level, the model starts by running the tokens through a word embedding. Then, it encodes these representations with an LSTMEncoder. Lastly, the output of the encoder is taken to create a final representation, which is passed through some feed-forward layers to output logits (output_layer).

Parameters:
  • vocab_size (int) -- The vocabulary size used to create the embedding.

  • num_classes (int) -- The number of classes of the classifier.

  • emb_dim (int, optional) -- The size of the embedding. Default value is 128.

  • padding_idx (int, optional) -- The padding token id in the embedding; the embedding vector at padding_idx will not be updated during training. Default value is 0.

  • lstm_hidden_size (int, optional) -- The hidden size of the LSTM. Default value is 198.

  • direction (str, optional) -- The direction of the LSTM. Default value is 'forward'.

  • lstm_layers (int, optional) -- The number of LSTM layers. Default value is 1.

  • dropout_rate (float, optional) -- The dropout rate of the LSTM. Default value is 0.0.

  • pooling_type (str, optional) -- The pooling type applied over the LSTM outputs. Default value is None; if pooling_type is None, the LSTMEncoder will return the hidden state of the last time step at the last layer as a single vector.

  • fc_hidden_size (int, optional) -- The output size of the linear layer after the encoder. Default value is 96.

forward(text, seq_len)[source]

Defines the computation performed at every call.

Parameters:
  • text (Tensor) -- The input token ids, with shape [batch_size, seq_len].

  • seq_len (Tensor) -- The actual lengths of the input sequences before padding.
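A minimal sketch along the same lines (import path and data are illustrative). With pooling_type=None, the hidden state of the last time step at the last layer serves as the sequence representation, as described above.

import paddle

from sentiment_analysis_model import LSTMModel  # hypothetical import path

model = LSTMModel(
    vocab_size=10000,
    num_classes=2,
    lstm_hidden_size=198,
    direction='forward',
    lstm_layers=1,
    dropout_rate=0.0,
    pooling_type=None,
)

text = paddle.randint(low=1, high=10000, shape=[4, 32])
seq_len = paddle.to_tensor([32, 20, 15, 8])
logits = model(text, seq_len)  # shape: [4, num_classes]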

class SkepSequenceModel(config: SkepConfig)[source]
forward(input_ids=None, token_type_ids=None, position_ids=None, attention_mask=None)[source]

Defines the computation performed at every call.

Parameters:
  • input_ids (Tensor, optional) -- Indices of input sequence tokens in the vocabulary.

  • token_type_ids (Tensor, optional) -- Segment token indices used to distinguish the first and second portions of the input.

  • position_ids (Tensor, optional) -- Indices of the position of each token in the input sequence.

  • attention_mask (Tensor, optional) -- Mask used to avoid attending to padding token positions.
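A hedged sketch: it assumes SkepSequenceModel wraps a pretrained SKEP backbone and is importable from this module. SkepConfig and SkepTokenizer come from paddlenlp.transformers; the checkpoint name is one of the released SKEP models, and treating the output as classification logits is an assumption, not something the docstring above states.

import paddle
from paddlenlp.transformers import SkepConfig, SkepTokenizer

from sentiment_analysis_model import SkepSequenceModel  # hypothetical import path

config = SkepConfig.from_pretrained("skep_ernie_2.0_large_en")
model = SkepSequenceModel(config)

tokenizer = SkepTokenizer.from_pretrained("skep_ernie_2.0_large_en")
encoded = tokenizer("this product works really well")

input_ids = paddle.to_tensor([encoded["input_ids"]])
token_type_ids = paddle.to_tensor([encoded["token_type_ids"]])

# Assumed to be classification logits over the sentiment classes.
logits = model(input_ids=input_ids, token_type_ids=token_type_ids)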