
scale_pos_weight using XGBoost's Learning API

Data Science: Asked by yatu on February 1, 2021

I see it is possible to add a weight for unbalanced problems in XGBoost’s Scikit-Learn API through scale_pos_weight (see the sketch after this list). Does it have an equivalent in the Learning API? If not,

  • Is there a reason behind this?
  • Could this corrective factor/weight also be implemented somehow using the Learning API?
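
For reference, a minimal sketch of the Scikit-Learn-API usage the question refers to; the value 2.5 is an arbitrary illustrative choice:

from xgboost import XGBClassifier

# Scikit-Learn API: scale_pos_weight is passed to the estimator's constructor
clf = XGBClassifier(scale_pos_weight=2.5)
# clf.fit(X, y) would then upweight the positive class during training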

One Answer

Yes, you can use scale_pos_weight in the native Python Learning API; it goes in the params dictionary. E.g.,

import xgboost

# dmat is an xgboost.DMatrix holding the training features and labels
params = {'objective': 'binary:logistic',
          'scale_pos_weight': 2.5}
model = xgboost.train(params, dmat)
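
For completeness, a self-contained sketch that computes the weight with the negative-to-positive count heuristic suggested in the XGBoost docs; the synthetic data (via sklearn.datasets.make_classification) and the num_boost_round value are assumptions for illustration:

import numpy as np
import xgboost
from sklearn.datasets import make_classification

# Synthetic imbalanced data, illustrative only: roughly 90% negatives, 10% positives
X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)

# Heuristic from the XGBoost docs: sum(negative instances) / sum(positive instances)
ratio = float(np.sum(y == 0)) / np.sum(y == 1)

dmat = xgboost.DMatrix(X, label=y)
params = {'objective': 'binary:logistic', 'scale_pos_weight': ratio}
model = xgboost.train(params, dmat, num_boost_round=10)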

https://xgboost.readthedocs.io/en/latest/parameter.html#parameters-for-tree-booster
https://github.com/dmlc/xgboost/blob/master/demo/kaggle-higgs/speedtest.py

Answered by Ben Reiniger on February 1, 2021
