
Question about backprop algorithm

Data Science · Asked by Alex Crim on March 4, 2021

I'm using this book to build a very barebones neural net framework that supports arbitrarily sized networks, but I'm confused about the backprop algorithm it provides and was hoping someone could explain the small things I'm stuck on.

The algorithm in the book is as follows:

[image of the back-propagation algorithm from the book]

What I am mainly confused about is the nested loop for subsequent layers.

First, the update rule $\Delta_{j} \leftarrow g'(in_{j}) \sum_{i} W_{i,j} \Delta_{i}$.

The book says earlier that $\Delta_{i} = Err^e_{i} \times g'(in_{i})$. So will $\Delta_{i}$ always be that, or should each iteration update it, so that this layer's $\Delta_{j}$ becomes the next iteration's $\Delta_{i}$?
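
To make that concrete, here is how I'm currently reading the backward pass in Python. The names (`W`, `ins`, `deltas`, `g_prime`) and the shape convention are my own, not the book's, so please correct me if this is the wrong reading:

```python
import numpy as np

# My own shape convention (not the book's): W[l] has shape (n_{l+1}, n_l),
# so W[l][i, j] connects node j in layer l to node i in layer l+1, and the
# pre-activations are ins[l+1] = W[l] @ activations[l].
def backward_deltas(W, ins, err, g_prime):
    L = len(W)                          # number of weight layers
    deltas = [None] * (L + 1)
    # Output layer: Delta_i = Err_i * g'(in_i)
    deltas[L] = err * g_prime(ins[L])
    # Walk backwards through the hidden layers.  The Delta_i summed over
    # here are the deltas just computed for the layer nearer the output,
    # so each pass the new Delta_j becomes the next pass's Delta_i.
    for l in range(L - 1, 0, -1):
        deltas[l] = g_prime(ins[l]) * (W[l].T @ deltas[l + 1])
    return deltas
```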

The second line in that loop, $W_{k,j} \leftarrow W_{k,j} + \alpha \times I_{k} \times \Delta_{j}$, is what the book says to use between the input and the first hidden layer, where $I_{k}$ is the input value. But what rule do I use between one hidden layer and the next? Do I use the same rule, but with $I_{k}$ replaced by that neuron's activation value instead of an input?
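
In code, my guess at the generalized rule would look like this (again my own names; I'm assuming the same outer-product update applies at every layer, with the presynaptic activation standing in for $I_k$):

```python
import numpy as np

# My own names: activations[0] is the input vector I, and activations[l]
# for l >= 1 are the hidden-layer outputs a_l.
def update_weights(W, activations, deltas, alpha):
    for l in range(len(W)):
        # W_{k,j} <- W_{k,j} + alpha * a_k * Delta_j for every pair (k, j),
        # written as a rank-one outer-product update.  For l == 0 the
        # presynaptic value a_k is the raw input I_k; for deeper layers
        # I'm assuming it is the previous layer's activation.
        W[l] += alpha * np.outer(deltas[l + 1], activations[l])
    return W
```

Is that the right way to generalize the update, or does the book intend something different between hidden layers?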
