What is the Cost Function for Neural Network with Dropout Regularisation?

Cross Validated | Asked on January 3, 2022

For some context, I shall outline my current understanding:

For a neural network applied to a binary classification problem, the cross-entropy cost function $J$ is defined as:

$ J = -\frac{1}{m} \sum_{i=1}^m \left[ y^i \log(a^i) + (1 - y^i) \log(1 - a^i) \right] $

  1. m = number of training examples
  2. y^i = class label of the i-th example (0 or 1)
  3. a^i = predicted probability for the i-th example (value between 0 and 1)
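For concreteness, here is a minimal NumPy sketch of this cost; the function name and the epsilon clipping are illustrative choices, not part of the question:

```python
import numpy as np

def cross_entropy_cost(a, y, eps=1e-12):
    """Binary cross-entropy cost J averaged over m examples.

    a : array of predicted probabilities a^i in (0, 1)
    y : array of class labels y^i in {0, 1}
    """
    a = np.clip(a, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))
```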

Dropout regularisation works as follows: for a given training example, we randomly shut down (zero out) some nodes in a layer according to some probability. This has the effect of keeping the weights small during training, and hence regularises the network and helps prevent overfitting.
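A small sketch of the standard "inverted dropout" forward pass, assuming a keep probability of 0.8 (the function and parameter names are illustrative):

```python
import numpy as np

def dropout_forward(activations, keep_prob=0.8, training=True):
    """Inverted dropout applied to one layer's activations.

    During training each unit is kept with probability keep_prob, and the
    surviving activations are rescaled by 1/keep_prob so the expected value
    of the layer's output stays the same. At test time nothing is dropped.
    """
    if not training:
        return activations
    mask = np.random.rand(*activations.shape) < keep_prob
    return activations * mask / keep_prob
```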

I have learnt that if we apply dropout regularisation, the cross-entropy cost function is no longer easy to define because of all the intermediate probabilities. Why is this the case? Why doesn't the old definition still hold? As long as the network learns better parameters, won't the cross-entropy cost decrease on every iteration of gradient descent? Thanks in advance.

One Answer

Dropout does not change the cost function, and you do not need to make changes to the cost function when using dropout.

The reasoning is that dropout is a way to average over an ensemble of each of the exponentially-many "thinned" networks resulting from dropping units randomly. In this light, each time you apply dropout and compute the loss, you're computing the loss that corresponds to a randomly-selected thinned network; collecting together many of these losses reflects a distribution of losses over these networks. Of course, the loss surface is noisier as a result, so model training takes longer. The goal of training the network in this way is to obtain a model that is averaged over all of these different "thinned" networks.
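To make this concrete, here is a minimal PyTorch sketch; the architecture, dropout rate, and random data are illustrative assumptions. The cost function is the same binary cross-entropy with or without the dropout layer; dropout only changes which randomly thinned network produces the predictions on each training forward pass, so repeated evaluations of the same cost are noisy.

```python
import torch
import torch.nn as nn

# Hypothetical tiny binary classifier with a dropout layer.
model = nn.Sequential(
    nn.Linear(20, 16),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes units during training
    nn.Linear(16, 1),
    nn.Sigmoid(),
)
loss_fn = nn.BCELoss()   # the same cross-entropy cost J as without dropout

x = torch.randn(32, 20)
y = torch.randint(0, 2, (32, 1)).float()

model.train()
# Two forward passes sample two different "thinned" networks, so the same
# cost function returns two different (noisy) values.
loss_a = loss_fn(model(x), y)
loss_b = loss_fn(model(x), y)

model.eval()             # dropout disabled: the full, averaged network is used
loss_eval = loss_fn(model(x), y)
```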

For more information, see How to explain dropout regularization in simple terms? or the original paper: Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", Journal of Machine Learning Research, 2014.

Answered by Sycorax on January 3, 2022
