What is the order of execution of steps in back-propagation algorithm in a neural network?

Artificial Intelligence Asked by gokul on February 4, 2021

I am a machine learning newbie. I am trying to understand the back-propagation algorithm. I have a training dataset of 60 instances/records.

What is the correct order of the process? This one?

  • Forward pass of the first instance. Calculate the error.
  • Weight update using backpropagation.
  • Forward pass of the second instance. Calculate the error.
  • Weight update using backpropagation. And so on…

Or

  • Forward pass of all instances one by one. Keep track of the error as a vector.
  • Weight update using backpropagation.

This video, https://www.youtube.com/watch?v=OwdjRYUPngE, seems to follow the second process. Is that correct?
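In code, the two orderings I have in mind look roughly like the following sketch. The one-parameter model $y = w cdot x$, the three data points, and the learning rate are all illustrative assumptions, not part of any real training setup:

```python
# Rough sketch of the two orderings on a toy one-parameter model y = w * x.
# Data, model, and learning rate are illustrative assumptions.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
lr = 0.01  # learning rate

def grad(w, x, t):
    # Gradient of the squared error 0.5 * (w*x - t)**2 with respect to w.
    return (w * x - t) * x

# Ordering 1: forward pass one instance, then update the weights immediately.
w_online = 0.0
for epoch in range(200):
    for x, t in data:
        w_online -= lr * grad(w_online, x, t)

# Ordering 2: forward pass all instances, collect the errors,
# then apply a single weight update per epoch.
w_batch = 0.0
for epoch in range(200):
    g = sum(grad(w_batch, x, t) for x, t in data) / len(data)
    w_batch -= lr * g
```

On this toy problem both orderings drive $w$ toward the same solution; the difference is only *when* the updates are applied.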

One Answer

Both orderings are feasible.

A generalization is to update the weights after the presentation of $n$ examples. If $n=1$, this is online training. If $n=60$ (the size of your dataset), this is batch training. For any value in between, this is mini-batch training.
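That generalization can be written as a single loop parameterized by $n$. A minimal sketch, where the one-parameter model, loss, and learning rate are assumptions chosen purely for illustration:

```python
# One epoch of training with a weight update every n examples:
# n = 1 -> online, n = len(dataset) -> batch, anything between -> mini-batch.
def train_epoch(w, dataset, n, lr=0.01):
    # Toy model y = w * x with squared-error loss (illustrative assumption).
    for i in range(0, len(dataset), n):
        batch = dataset[i:i + n]
        # Average the per-example gradients over the n presented examples...
        g = sum((w * x - t) * x for x, t in batch) / len(batch)
        # ...then apply one weight update for the whole group.
        w = w - lr * g
    return w
```

With the 60-record dataset from the question, $n=1$ gives 60 updates per epoch, $n=60$ gives one, and $n=10$ gives six.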

The main difference is not the computational complexity of the algorithm, but the theoretical speed of convergence to an "optimal" set of weights.

It is now generally accepted that online training is faster. Online training performs a stochastic approximation of the gradient descent method: the true gradient of the cost function over the whole training set is approximated by the gradient computed at each presentation of a single training example.

Intuitively, with batch learning the weights are modified by the average of the gradient over the whole training set, and averaging destroys information.

With stochastic (or mini-batch) learning, by contrast, each example gets its own say, one after the other.
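A tiny, deliberately contrived illustration of that averaging effect. The toy model $y = w cdot x$ and the two data points are assumptions chosen so that the per-example gradients exactly cancel:

```python
# At w = 0, these two examples produce equal and opposite gradients for
# the toy model y = w * x with squared-error loss (contrived assumption).
data = [(1.0, 1.0), (-1.0, 1.0)]
w = 0.0

per_example = [(w * x - t) * x for x, t in data]  # [-1.0, 1.0]
batch_grad = sum(per_example) / len(data)         # 0.0: the average hides both
```

A batch update does nothing here, even though each example individually carries a clear gradient signal that online training would act on at every presentation.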

This has also been discussed here.

Answered by jcm69 on February 4, 2021
