Is boosting and bagging only relevant in the context of decision trees?

Cross Validated Asked on December 5, 2021

In the documents I’ve seen on boosting and bagging, it seems that they’re always talked about in the context of decision trees.

What are some other methods in which the two are applicable?

2 Answers

Bagging and boosting are both ensemble techniques, but they are aimed at slightly different things. Bagging aims to reduce the variance of the predictions by creating an ensemble of multiple models, each trained on a bootstrap sample of the data. Boosting aims to decrease the bias of the predictions by focusing each new model on the data instances that the previously trained models in the ensemble classified incorrectly.

Now on to the main part of your question. Realistically, bagging can be used with any model to create an ensemble. Because it focuses on reducing variance, decision trees, which tend to overfit when grown with no maximum depth, are a natural model for explaining it. However, you could just as well use logistic regression or neural networks; as long as the bagging approach is followed, you'll get the same kind of benefit. Note: logistic regression rarely needs it, as it tends not to overfit.
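
As a minimal sketch of that point (the dataset and settings here are illustrative assumptions, not from the answer), scikit-learn's BaggingClassifier can wrap any base learner, including logistic regression:

```python
# Bagging a non-tree base learner: each logistic regression is fit on a
# bootstrap sample, and predictions are combined by majority vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

bagged_lr = BaggingClassifier(LogisticRegression(max_iter=1000),
                              n_estimators=25, random_state=0)

print(cross_val_score(bagged_lr, X, y, cv=5).mean())
```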

Boosting, again, is just easier to discuss in terms of decision stumps (depth-1 decision trees), because they are simple, high-bias learners, which is exactly the kind of model boosting aims at and is good at improving.
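
Likewise, a minimal sketch of boosting decision stumps with scikit-learn's AdaBoostClassifier (again, the dataset and settings are illustrative):

```python
# Boosting decision stumps: each depth-1 tree is fit on a reweighted
# version of the data, with higher weight on instances the previous
# stumps misclassified.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

boosted_stumps = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                    n_estimators=100, random_state=0)

print(cross_val_score(boosted_stumps, X, y, cv=5).mean())
```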

Answered by Jesse Ross on December 5, 2021

Boosting and bagging can be applied to various types of classifiers. When a classifier estimates posterior class probabilities, combining several such classifiers is straightforward (a small sketch follows the references below). Two classic references are:

Kittler on combining classifiers and

Hansen on ensembles of neural networks
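
As a minimal sketch of that kind of combination (the particular classifiers and dataset are illustrative choices, not taken from the references), one can average the posterior estimates of several fitted models and predict the class with the highest average probability:

```python
# Combining classifiers through their posterior class probabilities:
# average predict_proba outputs across models and take the argmax.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

models = [LogisticRegression(max_iter=1000), GaussianNB(),
          DecisionTreeClassifier(max_depth=3)]
for m in models:
    m.fit(X_train, y_train)

# Mean of the posterior estimates; the combined prediction is its argmax.
avg_proba = np.mean([m.predict_proba(X_test) for m in models], axis=0)
y_pred = np.argmax(avg_proba, axis=1)
print((y_pred == y_test).mean())
```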

Answered by Match Maker EE on December 5, 2021
