
Sklearn classification report is not printing the micro avg score for a multi-class classification model

Data Science Asked on April 16, 2021

There are 6 class labels encoded as 0,1,2,3,4,5

When I run classification_report, the output shows accuracy, macro avg, and weighted avg, but the micro avg row is missing. I'm not sure why the micro average score is not printed. What should I do to get the micro average score as well?

print(classification_report(y_test, best_preds,labels=[0,1,2,3,4,5]))

              precision    recall  f1-score   support

           0       0.65      0.76      0.70        46
           1       0.74      0.56      0.64        41
           2       0.60      0.68      0.64        41
           3       0.65      0.59      0.62        41
           4       0.75      0.79      0.77        61
           5       0.76      0.70      0.73        40

    accuracy                           0.69       270
   macro avg       0.69      0.68      0.68       270
weighted avg       0.69      0.69      0.69       270

I went through the sklearn documentation and, just to see what scores would be printed, changed the class labels from 0-5 to 1-6. To my surprise, the micro avg row did appear in the output. However, the scores in that report are wrong, because the actual class labels are 0-5 and not 1-6, which is also why a warning is printed along with the output:

print(classification_report(y_test, best_preds,labels=[1,2,3,4,5,6]))

              precision    recall  f1-score   support

           1       0.74      0.56      0.64        41
           2       0.60      0.68      0.64        41
           3       0.65      0.59      0.62        41
           4       0.75      0.79      0.77        61
           5       0.76      0.70      0.73        40
           6       0.00      0.00      0.00         0

   micro avg       0.70      0.67      0.69       224
   macro avg       0.58      0.55      0.56       224
weighted avg       0.70      0.67      0.69       224

UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
  'precision', 'predicted', average, warn_for)
UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples.
  'recall', 'true', average, warn_for)

One Answer

The micro-averaged precision, recall, and F1-score are all the same, and are all equal to the overall accuracy (!) when including all the classes.

That fact is hinted at in the documentation for classification_report, in the "Returns" field ("it is only shown for multi-label or multi-class with a subset of classes because it is accuracy otherwise"). It's explained in a little more detail at https://towardsdatascience.com/multi-class-metrics-made-simple-part-ii-the-f1-score-ebe8b2c2ca1 (near the end).
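
If you want the micro-averaged numbers explicitly, you can compute them yourself. Below is a minimal sketch (not part of the original answer) using precision_recall_fscore_support and accuracy_score on a small made-up label set; with the question's y_test and best_preds and all six labels listed, the three micro-averaged values and the accuracy come out identical.

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Toy labels for illustration only; substitute y_test / best_preds from the question.
y_true = [0, 1, 2, 3, 4, 5, 0, 1, 2, 3]
y_pred = [0, 1, 2, 3, 4, 0, 0, 2, 2, 3]

# Micro-averaging pools the true/false positives and negatives over all listed labels.
p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[0, 1, 2, 3, 4, 5], average='micro')

print(p, r, f1)                        # micro precision == micro recall == micro F1
print(accuracy_score(y_true, y_pred))  # ...and all of them equal the accuracy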

Answered by Ben Reiniger on April 16, 2021
