How does BackProp avoid bias for the last input used for training? - Artificial Intelligence Stack Exchange

Answer by cinch for How does BackProp avoid bias for the last input used for training?

The concern you've raised touches on the method of training known as online learning or stochastic gradient descent (SGD), where weights are updated after each individual training example. It's a legitimate worry: with per-example updates, the most recently seen example does leave the freshest mark on the weights. In practice, though, this recency effect stays negligible because the learning rate is kept small, so no single update can move the weights far; the training set is shuffled and iterated over for many epochs, so which example happens to come last keeps changing; and every example contributes repeatedly, so the final weights reflect the whole dataset rather than the last input.
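As a rough illustration of why the last example does not dominate, here is a minimal sketch (my own, not from the original answer) of per-example SGD on a toy one-parameter linear model; the dataset, learning rate, and epoch count are all illustrative assumptions:

    import random

    # Toy dataset following y = 2*x (illustrative values only).
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

    w = random.uniform(-1.0, 1.0)  # weights start out randomly assigned
    learning_rate = 0.01           # small step: no single example can move w far

    for epoch in range(200):
        random.shuffle(data)       # reshuffle each epoch: the "last" example keeps changing
        for x, y in data:
            y_hat = w * x                  # forward pass
            grad = 2.0 * (y_hat - y) * x   # gradient of the squared error w.r.t. w
            w -= learning_rate * grad      # per-example (online) SGD update

    print(f"learned w = {w:.3f}")  # ends near 2.0 regardless of presentation order

Because each update is small and the ordering is re-randomized every epoch, the weight the loop ends with is shaped by all the examples, not by whichever one happened to come last.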

How does BackProp avoid bias for the last input used for training?

Here's a BackProp algorithm definition from here: "Initially all the edge weights are randomly assigned. For every input in the training dataset, the ANN is activated and its output is observed. This output..."
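To make the quoted steps concrete, here is a minimal sketch (my own illustration, not from the linked article) of that training loop for a tiny one-hidden-layer network; the layer sizes, XOR-style data, learning rate, and epoch count are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # Toy dataset; a constant 1 is appended to each input as a bias unit.
    X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])   # desired outputs (XOR)

    # "Initially all the edge weights are randomly assigned."
    W1 = rng.normal(scale=0.5, size=(3, 3))  # (2 inputs + bias) -> 3 hidden units
    W2 = rng.normal(scale=0.5, size=(4, 1))  # (3 hidden + bias) -> 1 output

    lr = 0.5
    for epoch in range(10000):
        for x, t in zip(X, T):               # "for every input in the training dataset..."
            x = x.reshape(1, -1)
            h = sigmoid(x @ W1)                       # "...the ANN is activated..."
            hb = np.hstack([h, np.ones((1, 1))])      # append hidden bias unit
            y = sigmoid(hb @ W2)                      # "...and its output is observed"
            # The output is compared with the desired output, and the squared
            # error is propagated back through the network to adjust the weights.
            d_out = (y - t) * y * (1 - y)             # output-layer delta
            d_hid = (d_out @ W2[:3].T) * h * (1 - h)  # hidden-layer delta (bias row skipped)
            W2 -= lr * hb.T @ d_out
            W1 -= lr * x.T @ d_hid

With settings like these the outputs typically approach the targets after enough epochs, even though every single update is driven by just one example.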
