class ChunkEvaluator(label_list, suffix=False)[source]

Bases: paddle.metric.metrics.Metric

ChunkEvaluator computes the precision, recall and F1-score for chunk detection. It is often used in sequence tagging tasks, such as Named Entity Recognition (NER).

Parameters

  • label_list (list) – The label list.

  • suffix (bool) – If set to True, the label ends with ‘-B’, ‘-I’, ‘-E’ or ‘-S’; otherwise the label starts with them. Defaults to False.
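
A minimal sketch of what the suffix flag controls, using a hypothetical helper split_tag (not part of the PaddleNLP API): with suffix=False a label such as B-PER carries its position tag as a prefix, while with suffix=True the same chunk would be written PER-B.

```python
# Hypothetical helper, for illustration only -- not part of PaddleNLP.
def split_tag(label, suffix=False):
    """Split a chunk label into (chunk_type, position_tag)."""
    if label == "O":                                # outside-of-chunk label
        return None, "O"
    if suffix:
        chunk_type, _, tag = label.rpartition("-")  # "PER-B" -> ("PER", "B")
    else:
        tag, _, chunk_type = label.partition("-")   # "B-PER" -> ("B", "PER")
    return chunk_type, tag

print(split_tag("B-PER"))               # ('PER', 'B')
print(split_tag("PER-B", suffix=True))  # ('PER', 'B')
```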


Example:

from paddlenlp.metrics import ChunkEvaluator

num_infer_chunks = 10
num_label_chunks = 9
num_correct_chunks = 8

label_list = [1,1,0,0,1,0,1]
evaluator = ChunkEvaluator(label_list)
evaluator.update(num_infer_chunks, num_label_chunks, num_correct_chunks)
precision, recall, f1 = evaluator.accumulate()
print(precision, recall, f1)
# 0.8 0.8888888888888888 0.8421052631578948
compute(lengths, predictions, labels, dummy=None)[source]

Computes the number of inferred chunks, label chunks and correctly predicted chunks, from which the precision, recall and F1-score for chunk detection are derived.

Parameters

  • lengths (Tensor) – The valid length of every sequence, a tensor with shape [batch_size].

  • predictions (Tensor) – The predictions index, a tensor with shape [batch_size, sequence_length].

  • labels (Tensor) – The labels index, a tensor with shape [batch_size, sequence_length].

  • dummy (Tensor, optional) – Unused parameter, kept for compatibility with older versions whose parameter list was inputs, lengths, predictions, labels. Defaults to None.


Returns

A tuple (num_infer_chunks, num_label_chunks, num_correct_chunks).

With the fields:

  • num_infer_chunks (Tensor):

    The number of inferred chunks.

  • num_label_chunks (Tensor):

    The number of label chunks.

  • num_correct_chunks (Tensor):

    The number of correctly predicted chunks.

Return type

tuple
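The three counts returned by compute can be read as sizes of sets of chunk spans. A simplified sketch of that idea (not PaddleNLP's implementation; it assumes plain IOB tags on a single sequence and ignores type mismatches inside a chunk):

```python
# Simplified illustration of what the three chunk counts mean.
def extract_chunks(tags):
    """Return the set of (chunk_type, start, end) spans in an IOB tag list."""
    chunks, start, ctype = set(), None, None
    for i, tag in enumerate(tags + ["O"]):      # sentinel flushes the last chunk
        if tag.startswith("B-") or tag == "O":
            if ctype is not None:               # close the currently open chunk
                chunks.add((ctype, start, i))
                ctype = None
            if tag.startswith("B-"):            # open a new chunk
                start, ctype = i, tag[2:]
        # "I-" tags simply extend the open chunk in this sketch
    return chunks

labels      = ["B-PER", "I-PER", "O", "B-LOC"]
predictions = ["B-PER", "I-PER", "O", "O"]

infer   = extract_chunks(predictions)
label   = extract_chunks(labels)
correct = infer & label                          # spans present in both
print(len(infer), len(label), len(correct))      # 1 2 1
```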

update(num_infer_chunks, num_label_chunks, num_correct_chunks)[source]

This function takes (num_infer_chunks, num_label_chunks, num_correct_chunks) as input and accumulates them into the corresponding internal counters of the ChunkEvaluator object. The update rule is:

\[\begin{split}\begin{array}{l}{\text{self.num\_infer\_chunks} += \text{num\_infer\_chunks}} \\ {\text{self.num\_label\_chunks} += \text{num\_label\_chunks}} \\ {\text{self.num\_correct\_chunks} += \text{num\_correct\_chunks}}\end{array}\end{split}\]
Parameters

  • num_infer_chunks (int|numpy.array) – The number of chunks in Inference on the given mini-batch.

  • num_label_chunks (int|numpy.array) – The number of chunks in Label on the given mini-batch.

  • num_correct_chunks (int|float|numpy.array) – The number of chunks that appear in both Inference and Label on the given mini-batch.


accumulate()[source]

This function returns the mean precision, recall and F1-score over all accumulated mini-batches.


Returns

A tuple (precision, recall, f1 score).

Return type

tuple

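The arithmetic behind accumulate can be sketched with a hypothetical prf helper (not the library's actual code); fed the counts from the class-level example above, it reproduces the same metrics:

```python
# Hypothetical helper showing the precision/recall/F1 arithmetic.
def prf(num_infer, num_label, num_correct):
    precision = num_correct / num_infer if num_infer else 0.0
    recall = num_correct / num_label if num_label else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Counts from the example above: precision 0.8, recall ~0.889, f1 ~0.842.
print(prf(10, 9, 8))
```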


reset()[source]

Empties the evaluation memory accumulated over previous mini-batches.


name()[source]

Returns the name of the metric instance.