
Interpreting Hamming loss for multilabel classification

Cross Validated Asked on December 24, 2020

I have a multi-label, multi-class classifier that aims to predict the top 3 selling products, out of 11 possible, for a given day.

Using scikit-learn's OneVsRestClassifier with XGBoost as the estimator, the model achieves a Hamming loss of 0.25.

I'm not familiar with Hamming loss; I have mainly done binary classification with roc_auc in the past.

Is this an acceptable score, and how can I describe the effectiveness of the model?
Does it mean that the model predicts 0.25 × 11 = 2.75 labels wrong per day on average?
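A quick sanity check of that arithmetic: Hamming loss is the fraction of all label entries (samples × labels) that are mispredicted, so multiplying it by the number of labels gives the average number of wrong labels per sample. A minimal sketch using scikit-learn's `hamming_loss` (the random arrays here are just stand-ins, not real sales data):

```python
import numpy as np
from sklearn.metrics import hamming_loss

rng = np.random.default_rng(42)
n_days, n_products = 100, 11

# Hypothetical binary indicator matrices: 1 = product is a top-3 seller that day.
y_true = rng.integers(0, 2, size=(n_days, n_products))
y_pred = rng.integers(0, 2, size=(n_days, n_products))

hl = hamming_loss(y_true, y_pred)

# Average number of mispredicted label entries per day.
wrong_per_day = (y_true != y_pred).sum(axis=1).mean()

# hamming_loss * n_labels equals wrong labels per sample on average.
assert np.isclose(wrong_per_day, hl * n_products)
```

So yes: with 11 labels, a Hamming loss of 0.25 corresponds to 0.25 × 11 = 2.75 wrong label entries per day on average.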

One Answer

[Slide: worked Hamming loss example from a multilabel classification tutorial]

The slide shows a good example: with 5 samples and 4 labels, and 4 mispredicted label entries in total, HL = 4/(5×4) = 0.2.
For more information, refer to https://users.ics.aalto.fi/jesse/talks/Multilabel-Part01.pdf
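The slide's example can be reproduced directly. Below is a hypothetical 5-sample, 4-label case (the specific matrices are illustrative, not the slide's exact data) in which exactly 4 of the 20 label entries are wrong, giving HL = 4/(5×4) = 0.2:

```python
import numpy as np
from sklearn.metrics import hamming_loss

y_true = np.array([
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])
y_pred = np.array([
    [1, 0, 0, 1],  # all correct
    [0, 1, 1, 0],  # 1 wrong entry
    [1, 0, 0, 0],  # 1 wrong entry
    [0, 0, 1, 1],  # 1 wrong entry
    [1, 0, 0, 1],  # 1 wrong entry
])

# 4 wrong entries out of 5 * 4 = 20 total -> 0.2
print(hamming_loss(y_true, y_pred))  # 0.2
```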

Answered by En Ouyang on December 24, 2020

