1. First apply the inputs to the network and work out the output. This initial output could be anything, as the initial weights are random numbers.
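This forward pass can be sketched for a single sigmoid neuron; the input and weight values below are illustrative assumptions, not taken from the text:

```python
import math

def sigmoid(x):
    # The Sigmoid Function squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical neuron with two inputs and randomly chosen initial weights.
inputs = [0.35, 0.9]
weights = [0.1, 0.8]   # arbitrary starting values

# Forward pass: weighted sum of the inputs, then the activation function.
net = sum(i * w for i, w in zip(inputs, weights))
output = sigmoid(net)
print(round(output, 4))  # → 0.6803
```

Whatever the weights are, the first output is essentially arbitrary; training then nudges it toward the target.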

2. Next work out the error for neuron B:

ErrorB = OutputB (1 - OutputB)(TargetB - OutputB) (5.19)

The Output (1 - Output) term is necessary in the equation because of the Sigmoid Function; if a simple threshold activation function were used instead, the error would just be (Target - Output).
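Equation (5.19) can be checked numerically; the output and target values below are illustrative assumptions:

```python
# Sketch of equation (5.19): error term for an output neuron B.
output_b = 0.68   # assumed output of neuron B
target_b = 1.0    # assumed training target for B

# The Output(1 - Output) factor comes from the derivative of the sigmoid.
error_b = output_b * (1.0 - output_b) * (target_b - output_b)
print(round(error_b, 4))  # → 0.0696
```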

3. Change the weight. Let W+AB be the new (trained) weight and WAB the initial weight:

W+AB = WAB + (ErrorB x OutputA) (5.20)

Notice that it is the output of the connecting neuron (neuron A) that appears in the update, not neuron B's own output. Update all the weights in the output layer in this way.
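A minimal sketch of the update in equation (5.20), with illustrative values assumed for the weight, output, and error:

```python
# Sketch of equation (5.20): updating the weight from neuron A to neuron B.
w_ab = 0.3        # assumed initial weight W_AB
output_a = 0.68   # output of the connecting neuron A
error_b = 0.0696  # error of neuron B, as in equation (5.19)

# New weight = old weight + (error of B x output of A).
w_ab_new = w_ab + (error_b * output_a)
print(round(w_ab_new, 4))  # → 0.3473
```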

4. Calculate the Errors for the hidden layer neurons. Unlike the output layer it is not possible to calculate these directly (because there is no Target), so Back Propagate them from the output layer (hence the name of the algorithm). This is done by taking the Errors from the output neurons and running them back through the weights to get the hidden layer errors. For example, if hidden neuron A is connected to output neurons B and C, then take the errors from B and C to generate an error for A:

ErrorA = OutputA (1 - OutputA)(ErrorB WAB + ErrorC WAC) (5.21)
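The back-propagated error cited as equation (5.21) can be sketched for a hidden neuron A that feeds two output neurons B and C; all numeric values here are illustrative assumptions:

```python
# Sketch of back-propagating error to hidden neuron A, which connects
# to output neurons B and C.
output_a = 0.68                     # output of hidden neuron A
error_b, error_c = 0.0696, 0.0412   # errors of output neurons B and C
w_ab, w_ac = 0.3, 0.5               # weights from A to B and from A to C

# Run the output errors back through the weights, then scale by the
# sigmoid-derivative factor Output(1 - Output) of neuron A.
error_a = output_a * (1.0 - output_a) * (error_b * w_ab + error_c * w_ac)
print(round(error_a, 4))  # → 0.009
```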


5. Having obtained the Errors for the hidden layer neurons, proceed as in stage 3 to change the hidden layer weights. By repeating this method a network with any number of layers can be trained. Equations (5.19) to (5.21) denote the error and weight-update calculations used in back propagation.
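Putting stages 1 to 5 together, here is a minimal sketch of the full training loop for an assumed 2-input, 2-hidden, 1-output network; the network size, starting weights, input, and target are all illustrative, and the implied learning rate is 1:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical network trained on a single example (values are assumptions).
x = [0.35, 0.9]
target = 0.5
w_hidden = [[0.1, 0.8], [0.4, 0.6]]   # weights into the two hidden neurons
w_out = [0.3, 0.9]                    # weights from hidden neurons to output

for _ in range(100):
    # Stage 1: forward pass through hidden and output layers.
    h = [sigmoid(sum(xi * w for xi, w in zip(x, ws))) for ws in w_hidden]
    out = sigmoid(sum(hi * w for hi, w in zip(h, w_out)))

    # Stage 2: output error, equation (5.19).
    err_out = out * (1 - out) * (target - out)

    # Stage 4: hidden errors, computed with the old output weights
    # before those weights are overwritten.
    err_hidden = [h[i] * (1 - h[i]) * (err_out * w_out[i]) for i in range(2)]

    # Stage 3: update the output-layer weights, equation (5.20).
    w_out = [w_out[i] + err_out * h[i] for i in range(2)]

    # Stage 5: update the hidden-layer weights the same way.
    for i in range(2):
        w_hidden[i] = [w_hidden[i][j] + err_hidden[i] * x[j]
                       for j in range(2)]

print(round(out, 3))
```

After repeated passes the output drifts toward the target, illustrating how the same five stages extend to any number of layers.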