
Using Cross Validation technique for a CNN model

Data Science Asked on June 26, 2021

I am working on a CNN model. As usual, I trained my model with batches over several epochs. After training and validation were complete, I used a test set to measure the model's performance and generate a confusion matrix. Now I want to use cross-validation to train my model. I can implement it, but there are some questions on my mind:

1- Why do most CNN models not use cross-validation?

2- If I use cross-validation, how can I generate the confusion matrix? Can I split the dataset into train/test sets, run cross-validation on the training set as train/validation (i.e., use cross-validation for train/validation in addition to the usual train/test split), and finally use the test set as before? Or how should it be done?

2 Answers

Question 1: Why do most CNN models not apply the cross-validation technique?

$k$-fold cross-validation is typically used for models with few parameters, simple hyperparameters, and cheap optimization; typical examples are linear regression, logistic regression, small neural networks, and support vector machines. For a convolutional neural network with many parameters (e.g., more than a million), there are simply too many possible changes to the architecture to search over. What you can do instead is run a few targeted experiments with the learning rate, batch size, dropout (amount and position), and batch normalization (position). Training a convolutional neural network on a huge dataset already takes a long time, so full hyperparameter optimization via cross-validation would be total overkill. Moreover, papers often try to improve on the results of other research papers; the goal is not to squeeze out better numbers by tuning hyperparameters, but rather to come up with new ideas that solve the given task with better accuracy or less computational effort.
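To make the "few targeted experiments" idea concrete, here is a minimal sketch, assuming TensorFlow/Keras, 28x28 grayscale inputs, and pre-loaded arrays `x_train`, `y_train`, `x_val`, `y_val` (all hypothetical placeholders), that loops over a handful of hand-picked learning rates and dropout rates instead of running a full cross-validated search:

```python
import tensorflow as tf

def build_cnn(dropout_rate, learning_rate):
    """A small CNN whose dropout rate and learning rate we vary by hand."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# A handful of hand-picked settings, evaluated on a single fixed validation set.
for lr in (1e-3, 1e-4):
    for dropout in (0.25, 0.5):
        model = build_cnn(dropout, lr)
        history = model.fit(x_train, y_train, batch_size=64, epochs=5,
                            validation_data=(x_val, y_val), verbose=0)
        print(lr, dropout, max(history.history["val_accuracy"]))
```

The point of the sketch is the contrast: a single train/validation split with a few configurations, rather than $k$ full trainings per configuration as cross-validation would require.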

Question 2: If I use cross-validation, how can I generate the confusion matrix? Can I split the dataset into train/test, do cross-validation on the training set as train/validation (i.e., use cross-validation for train/validation in addition to the usual train/test split), and finally use the test set as before? Or how?

In order to do $k$-fold cross-validation you need to split your initial dataset into two parts: one for hyperparameter optimization and one for final validation. You then take the dataset for hyperparameter optimization and split it into $k$ (ideally) equally sized datasets $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_k$. For the sake of clarity, let $k = 3$. For each hyperparameter combination we want to test, we use $\mathcal{D}_1$ and $\mathcal{D}_2$ to fit the model and $\mathcal{D}_3$ to validate it. We then repeat with $\mathcal{D}_2$ and $\mathcal{D}_3$ for fitting and $\mathcal{D}_1$ for validation, and once more with $\mathcal{D}_1$ and $\mathcal{D}_3$ for fitting and $\mathcal{D}_2$ for validation. This yields $3$ confusion matrices for every hyperparameter configuration. To derive a single metric from these three results, we take the mean of the confusion matrices. We can then scan through the averaged confusion matrices to select the best hyperparameter configuration (you have to define which parts of the confusion matrix matter for your problem). Finally, we take the 'best' hyperparameters and calculate the prediction performance on the final validation set. These performance metrics are the ones you report.
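A minimal sketch of this procedure in Python, assuming scikit-learn and NumPy are available; `build_model` (a Keras-style CNN constructor whose `predict` returns class probabilities) and `hyperparameter_configs` are hypothetical stand-ins for your own model and the configurations you want to test:

```python
import numpy as np
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import confusion_matrix

# Hold out a final test set first; cross-validate only on the rest.
X_cv, X_test, y_cv, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

kf = KFold(n_splits=3, shuffle=True, random_state=0)
avg_matrices = {}
for config in hyperparameter_configs:            # e.g. [{"lr": 1e-3}, {"lr": 1e-4}]
    fold_matrices = []
    for train_idx, val_idx in kf.split(X_cv):
        model = build_model(**config)            # hypothetical CNN constructor
        model.fit(X_cv[train_idx], y_cv[train_idx], epochs=5, verbose=0)
        preds = np.argmax(model.predict(X_cv[val_idx]), axis=1)
        fold_matrices.append(confusion_matrix(y_cv[val_idx], preds))
    # One averaged confusion matrix per hyperparameter configuration.
    avg_matrices[str(config)] = np.mean(fold_matrices, axis=0)

# Pick the best config by whatever summary of the matrix matters to you,
# retrain on all of X_cv, and report the confusion matrix on X_test only once.
```

Note that the test set enters exactly once, at the very end; all model selection happens inside the folds of `X_cv`.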

Correct answer by MachineLearner on June 26, 2021

The previous answer has already been accepted, but I am adding this answer to make sure things are clear. I will go one step deeper, which may be helpful for more advanced readers.

First of all, cross-validation is a model selection mechanism used mainly to select hyperparameters. Changing hyperparameters affects the number of parameters in the model: for example, adding a layer to a neural network can introduce thousands of additional parameters (depending on the width of the layer).
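To illustrate, here is a minimal sketch (assuming Keras; the toy fully connected network is hypothetical) showing how one extra hidden layer changes the parameter count reported by `count_params()`:

```python
import tensorflow as tf

def make_net(n_hidden):
    """A toy network with n_hidden hidden layers of width 256."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(784,)))
    for _ in range(n_hidden):
        model.add(tf.keras.layers.Dense(256, activation="relu"))
    model.add(tf.keras.layers.Dense(10))
    return model

# Each extra hidden layer adds 256*256 weights + 256 biases = 65,792 parameters.
print(make_net(1).count_params())  # 203,530
print(make_net(2).count_params())  # 269,322
```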

Second, almost any training algorithm has an unlimited number of possible hyperparameter settings. To make this clear, consider an example: in a CNN, the number of layers is a hyperparameter that can in theory take any value from 1 to infinity, so by changing this one hyperparameter alone I can generate infinitely many models. At the same time, the number of levels (depth) of a decision tree is also a hyperparameter that can take any value from 1 to infinity, so I can likewise generate infinitely many decision tree models; yet we use cross-validation with decision trees but not with CNNs!

Do not confuse hyperparameters with parameters: cross-validation has nothing to do with parameters; it is only about hyperparameters and different training algorithms. Setting the values of the parameters is taken care of by the training algorithm.

Let us go back to the original question: why don't we use cross-validation with CNNs? The answer rests on a very important concept in machine learning: variance error vs. bias error. Say you have N trained models that all have variance error and zero bias error; in this case, using cross-validation to select one model is not useful, but averaging the models is. If instead you have N models with different (non-zero) bias errors, then cross-validation is useful for selecting the best model, but averaging is harmful. Whenever your models have different bias errors, use cross-validation to determine the best model; whenever your models have variance errors, use averaging to determine the final output.

CNNs have a tendency toward overfitting, not underfitting. Today we know that the deeper the network, the better, but overfitting is what scares us. CNNs are therefore good targets for averaging rather than selection, which is why researchers sometimes train four or five models and then average their outputs.
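A minimal sketch of that averaging step, assuming a hypothetical list `models` of independently trained CNNs that all output class probabilities for the same held-out data `X_test`:

```python
import numpy as np

# Average the softmax outputs of the ensemble members, then take the argmax.
avg_probs = np.mean([m.predict(X_test) for m in models], axis=0)
ensemble_preds = np.argmax(avg_probs, axis=1)  # final class decision per example
```

Averaging reduces the variance component of the error, which is exactly the failure mode this answer attributes to CNNs.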

How to select a network architecture has been studied in the literature, and that work gives clear guidance on choosing your hyperparameters. In fact, if you have a lot of data, just go for larger models.

I recommend you read the following papers: 1- The 2012 paper by Krizhevsky, Sutskever, and Hinton, where AlexNet was proposed. You will see that most of the tricks they propose deal with overfitting (variance error), not bias error. 2- The super learner paper, "Super Learner In Prediction", which explains mathematically what cross-validation is. Many people think of cross-validation as a set of training/testing experiments that scans a set of hyperparameters and returns the best model, but they ignore whether this is enough to guarantee that it is the best model obtainable from the available training data. They also ignore all the assumptions cross-validation needs in order to guarantee that the returned model is the super learner.

Answered by Bashar Haddad on June 26, 2021
