
Proper cross validation for stacking models

Asked on Cross Validated by Tomek Tarczynski on January 1, 2022

Let's assume that we have a dataset containing a continuous variable $Y$ that we want to predict and 10 predictors $X_{1}, \dots, X_{10}$. The number of observations is $n=1000$. I have questions about proper cross validation in the two following situations:

  1. I want to add a variable $X_{11}$ which is equal to the average of $Y$ over the 10 nearest observations (the metric is not important). On this extended dataset I would like to fit a linear regression. What is the proper way to do CV (k-fold with $k=5$)?

    • Add $X_{11}$ using the whole dataset and then do 'normal' k-fold cross validation? In that situation some information about the test set leaks into the training part, so the error estimate will be biased.
    • Add $X_{11}$ separately in each fold using only the training data. But then the question is how to add $X_{11}$ in the test set: should only the test set be used? Because of the small number of cases in the test set, the variable $X_{11}$ might be biased.
  2. Two models were built (for example: random forest and gradient boosting machine) and now I want to make a linear blend of those two models. Which predictions from the models should be used as predictors? One solution is:

    • Split the dataset into train/test (800/200), build the two models on the training set, blend their predictions, and test the final blended predictions on the test set. Repeat this 5 times with a different fold serving as the test set.
      I believe that this solution might not be perfect, because random forest tends to overfit on the training dataset. I feel that it would be better to blend predictions that were not made on the models' own training data. To overcome this, one might do the following:
    • Split the dataset into train/test (800/200). Then do k-fold CV on the training set and use the out-of-fold estimates as predictors (so the split is 640/160). This is a much more time-consuming solution, but it should be more reliable. The drawback is that with k-fold cross validation for $k=5$, the models that serve as inputs to the blending are trained on only 16/25 of the examples.

I strongly believe that both cases are well known, but I would like to know what the state of the art is in this matter.

2 Answers

Question 1:

This is really similar to what is called KFold Target Encoding, and a correct way to do it is explained here:

https://medium.com/@pouryaayria/k-fold-target-encoding-dfe9a594874b

Your encoding is slightly different from what is described in the article above, but you can apply the same design.
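
For reference, a minimal sketch of the out-of-fold idea behind K-fold target encoding, assuming a pandas DataFrame with a categorical column; the column names, fold count, and fallback rule are illustrative, not taken from the question or the article:

```python
import pandas as pd
from sklearn.model_selection import KFold

def kfold_target_encode(df, cat_col, target_col, n_splits=5, seed=0):
    """Encode cat_col by the mean of target_col computed only from
    out-of-fold rows, so no row ever sees its own target value."""
    encoded = pd.Series(index=df.index, dtype=float)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for fit_idx, enc_idx in kf.split(df):
        fold_means = df.iloc[fit_idx].groupby(cat_col)[target_col].mean()
        encoded.iloc[enc_idx] = df.iloc[enc_idx][cat_col].map(fold_means).to_numpy()
    # Categories unseen in a fold fall back to the global target mean.
    return encoded.fillna(df[target_col].mean())
```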

Answered by steco on January 1, 2022

Question 1: local prediction & cross validation

Looking for nearby cases and upweighting them for prediction is referred to as local models or local prediction.

For the proper way to do cross validation, remember that for each fold you only use the training cases, and then do with the test cases exactly what you would do to predict a new unknown case.

I'd recommend seeing the calculation of $X_{11}$ as part of the prediction, e.g. as a two-level model consisting of an $n$ nearest neighbours step plus a second-level model:

  1. For each of the training cases, find the $n$ nearest neighbours and calculate $X_{11}$
  2. Calculate the "2nd level" model based on $X_1, ..., X_{11}$.

So for prediction of a case $X_{new}$, you

  1. find the $n$ nearest neighbours and calculate the $X_{11}$ for the new case
  2. then calculate the prediction of the 2nd level model.

You use exactly this prediction procedure to predict the test cases in the cross validation.
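
A minimal sketch of this two-level procedure with scikit-learn, assuming numpy arrays X, y, the 10 neighbours and 5 folds from the question, and mean squared error as the score; treating a training case's nearest neighbour as the case itself (and therefore skipping it) is an assumption:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.neighbors import NearestNeighbors

def cv_two_level(X, y, n_neighbors=10, n_splits=5, seed=0):
    """k-fold CV where the neighbour-average feature X11 is built from
    training data only, exactly as it would be for a new unknown case."""
    fold_mse = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        X_tr, y_tr = X[train_idx], y[train_idx]
        X_te, y_te = X[test_idx], y[test_idx]

        # Level 1: the neighbour search is fitted on the training fold only.
        nn = NearestNeighbors(n_neighbors=n_neighbors).fit(X_tr)

        # Training cases: drop the first neighbour (the case itself).
        _, idx = nn.kneighbors(X_tr, n_neighbors=n_neighbors + 1)
        x11_tr = y_tr[idx[:, 1:]].mean(axis=1)

        # Test cases: neighbours are taken from the training fold only.
        _, idx = nn.kneighbors(X_te)
        x11_te = y_tr[idx].mean(axis=1)

        # Level 2: linear regression on the original predictors plus X11.
        model = LinearRegression().fit(np.column_stack([X_tr, x11_tr]), y_tr)
        pred = model.predict(np.column_stack([X_te, x11_te]))
        fold_mse.append(np.mean((pred - y_te) ** 2))
    return np.mean(fold_mse)
```

The point is that nothing computed from the test fold (neither neighbours nor their $Y$ values) ever enters the training of the second-level model.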


Question 2: combining predictions

"random forest tends to overfit on the training dataset"

Usually random forest will overfit only in situations where you have a hierarchical/clustered data structure that creates a dependence between (some) rows of your data.
Boosting is more prone to overfitting because of the iteratively weighted average (as opposed to the simple average of the random forest).

I have not yet completely understood your question (see comment), but here's my guess:

I assume you want to find the optimal weights for the random forest and the boosted predictions, i.e. a linear model on top of those two models. (I don't see how you could use the individual trees within those ensemble models, because the trees will change completely between the splits.) This again amounts to a 2-level model (or 3 levels if combined with the approach of question 1).

The general answer here is that whenever you do data-driven model selection or hyperparameter optimization (e.g. optimizing the weights for the random forest and gradient boosted predictions via test/cross validation results), you need an independent validation to assess the real performance of the resulting model. Thus you need either yet another independent test set, or a so-called nested or double cross validation.

  • So the 1st approach would not work unless you derive the weights from the training data.
  • As you point out for the 2nd approach, having more and more levels of cross validation needs huge sample sizes to start with.
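
A sketch of such a nested (double) cross validation for the blending case, assuming scikit-learn regressors, numpy arrays X, y, and an unconstrained least-squares blend; all names and settings are illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

def nested_cv_blend(X, y, inner_splits=5, outer_splits=5, seed=0):
    """Outer CV validates the whole procedure; inner CV supplies the
    out-of-fold predictions used to fit the blending weights."""
    outer_mse = []
    for tr_idx, te_idx in KFold(outer_splits, shuffle=True, random_state=seed).split(X):
        X_tr, y_tr, X_te, y_te = X[tr_idx], y[tr_idx], X[te_idx], y[te_idx]

        rf = RandomForestRegressor(random_state=seed)
        gbm = GradientBoostingRegressor(random_state=seed)

        # Inner CV: out-of-fold predictions on the outer-training data, so
        # the blender never sees a model's fit to its own training rows.
        inner = KFold(inner_splits, shuffle=True, random_state=seed)
        oof = np.column_stack([
            cross_val_predict(rf, X_tr, y_tr, cv=inner),
            cross_val_predict(gbm, X_tr, y_tr, cv=inner),
        ])
        blender = LinearRegression().fit(oof, y_tr)

        # Refit both base models on the full outer-training fold, then
        # blend their predictions on the untouched outer-test fold.
        rf.fit(X_tr, y_tr)
        gbm.fit(X_tr, y_tr)
        test_preds = np.column_stack([rf.predict(X_te), gbm.predict(X_te)])
        outer_mse.append(np.mean((blender.predict(test_preds) - y_te) ** 2))
    return np.mean(outer_mse)
```

scikit-learn's StackingRegressor covers roughly the inner part (out-of-fold base-model predictions feeding a final estimator); the outer loop is still needed to validate the stacked model itself.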

I'd recommend a different approach here: try to cut down the number of splits you need by doing as few data-driven hyperparameter calculations or optimizations as possible. There can be no discussion about the need to validate the final model. But you may be able to show that no inner splitting is needed if you can show that the models you want to stack are not overfit. In addition, this would remove the need to stack at all:

Ensemble models only help if the underlying individual models suffer from variance, i.e. are unstable. (Or if they are biased in opposing directions, so that the ensemble roughly cancels the individual biases. I suspect that this is not the case here, assuming that your GBM uses trees like the RF.)
As for the instability, you can measure this easily by repeated (aka iterated) cross validation (see e.g. this answer). If this does not point to substantial variance in the predictions of the same case by models built on slightly varying training data (i.e. if your RF and GBM are stable), producing an ensemble of the ensemble models is not going to help.
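
A rough sketch of such a stability check via repeated cross validation, assuming scikit-learn regressors and numpy arrays X, y; what counts as "substantial" spread has to be judged against the scale of $Y$:

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_predict

def prediction_spread(model, X, y, n_splits=5, n_repeats=20, seed=0):
    """Repeated k-fold CV: collect one out-of-fold prediction per case and
    per repetition, then report the average per-case spread."""
    preds = np.empty((n_repeats, len(y)))
    for r in range(n_repeats):
        cv = KFold(n_splits=n_splits, shuffle=True, random_state=seed + r)
        preds[r] = cross_val_predict(model, X, y, cv=cv)
    # Standard deviation across repetitions, per case: large values mean the
    # model reacts strongly to small changes in its training data.
    return preds.std(axis=0).mean()
```

Comparing this number for the RF and the GBM against the scale of $Y$ gives a quick impression of whether there is any instability left for a stacked ensemble to average out.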

Answered by cbeleites unhappy with SX on January 1, 2022
