Some new neural network architectures with improved learning schemes
Here, we present two new neuron-model architectures and a modified form of the standard feedforward architecture (MSTD). Both new models are trained with the self-scaling scaled conjugate gradient algorithm (SSCGA) and the lambda–gamma (L–G) algorithm, and combine the properties of basic and higher-order neurons (i.e., multiplicative and summative aggregation functions). Of the two, the compensatory neural network architecture (CNNA) requires relatively fewer inter-neuronal connections, cuts the computational budget by almost 50%, and speeds up convergence, while also giving better training and prediction accuracy. The second model, sigma–pi–sigma (SPS), ensures faster convergence and better training and prediction accuracy. The third model (MSTD) performs much better than the standard feedforward architecture (STD). Normalizing the outputs for training, also studied here, yields virtually no improvement at low iteration counts (around 500) as the scaling range increases. Increasing the number of neurons beyond a point likewise has little effect in the case of higher-order neurons. Numerous simulation runs on the satellite orbit-determination problem and complex XOR problems establish the robustness of the proposed neuron-model architectures.
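To illustrate the kind of higher-order neuron the abstract describes, the sketch below shows one plausible forward pass for a sigma–pi–sigma (SPS) unit: a summing (sigma) layer, a product (pi) layer that multiplies subsets of those sums, and a final summing output. All function and parameter names here (`sigma_pi_sigma`, `W1`, `pi_groups`, `w_out`, `b_out`) are illustrative assumptions, not taken from the paper.

```python
import math

def sigma_pi_sigma(x, W1, pi_groups, w_out, b_out):
    """Hypothetical sketch of a sigma-pi-sigma (SPS) forward pass.

    x         : input vector
    W1        : weight rows for the first (sigma) layer
    pi_groups : index tuples; each pi unit multiplies those sigma outputs
    w_out     : weights of the final (sigma) output unit
    b_out     : bias of the output unit
    """
    # First sigma layer: weighted sums squashed by a sigmoid
    s = [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
         for row in W1]
    # Pi layer: each unit multiplies a subset of the sigma outputs
    p = [math.prod(s[i] for i in group) for group in pi_groups]
    # Output sigma: weighted sum of the product terms plus a bias
    return sum(w * pi for w, pi in zip(w_out, p)) + b_out

# Example with hand-picked weights (purely illustrative)
y = sigma_pi_sigma(
    x=[1.0, 2.0],
    W1=[[0.5, -0.25], [1.0, 0.0]],
    pi_groups=[(0, 1), (0,)],
    w_out=[1.0, -1.0],
    b_out=0.5,
)
```

The product layer is what gives the neuron its "higher-order" character: it captures multiplicative interactions between inputs that a plain weighted sum cannot represent.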